Unlike traditional convolutional neural networks (CNNs) and vision transformers, the multi-layer perceptron (MLP) is a new kind of vision model with an extremely simple architecture that consists only of stacked fully connected layers. The input image of a vision MLP is usually split into multiple tokens (patches), yet existing MLP models aggregate them with fixed weights, ignoring the varying semantic information carried by tokens from different images. To aggregate tokens dynamically, we propose to represent each token as a wave function with two parts: amplitude and phase. The amplitude is the original feature, while the phase term is a complex value that changes according to the semantic content of the input image. Introducing the phase term can dynamically modulate the relationship between tokens and fixed weights in the MLP. Based on this wave-like token representation, we establish a novel Wave-MLP architecture for vision tasks. Extensive experiments demonstrate that the proposed Wave-MLP outperforms state-of-the-art MLP architectures on various vision tasks such as image classification, object detection, and semantic segmentation.
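To make the wave-like token representation concrete, below is a minimal PyTorch sketch of phase-aware token mixing: the amplitude is the raw token feature, a phase is predicted from the token content, and fixed fully connected weights aggregate the resulting wave components. All module and parameter names (PhaseAwareTokenMixing, phase_fc, token_fc) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PhaseAwareTokenMixing(nn.Module):
    """Sketch of wave-like token aggregation: each token is represented by
    an amplitude (the original feature) and a phase estimated from the
    token itself, then tokens are mixed with fixed FC weights."""

    def __init__(self, num_tokens: int, dim: int):
        super().__init__()
        self.phase_fc = nn.Linear(dim, dim)  # phase depends on token content
        self.token_fc = nn.Linear(num_tokens, num_tokens)  # fixed mixing weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim); the amplitude is the raw feature
        phase = self.phase_fc(x)
        # expand each token into the real/imaginary parts of a wave
        real = x * torch.cos(phase)
        imag = x * torch.sin(phase)
        # aggregate along the token axis with shared FC weights
        real = self.token_fc(real.transpose(1, 2)).transpose(1, 2)
        imag = self.token_fc(imag.transpose(1, 2)).transpose(1, 2)
        return real + imag  # recombine the two wave components

x = torch.randn(2, 16, 64)
print(PhaseAwareTokenMixing(16, 64)(x).shape)  # torch.Size([2, 16, 64])
```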
Electroencephalography (EEG) is a popular and effective tool for emotion recognition. However, the propagation mechanism of EEG signals in the human brain and their intrinsic correlation with emotion remain obscure to researchers. This work proposes four variant transformer frameworks (spatial attention, temporal attention, sequential spatial-temporal attention, and simultaneous spatial-temporal attention) to explore the relationship between emotion and spatial-temporal EEG features. Specifically, spatial attention and temporal attention learn the topological structure information and the time-varying EEG features, respectively. Sequential spatial-temporal attention applies spatial attention within each one-second segment and temporal attention across the segments of one sample, to explore the degree to which emotional stimuli influence the EEG signals of different electrodes within the same time segment. Simultaneous spatial-temporal attention, in which spatial and temporal attention are performed at the same time, models the relationships between different spatial features across different time segments. Experimental results show that simultaneous spatial-temporal attention achieves the best emotion recognition accuracy among the design choices, indicating that modeling the correlation of the spatial and temporal features of EEG signals is important for emotion recognition.
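As a rough illustration of the simultaneous variant, the sketch below treats every (electrode, time-segment) pair as a single token, so one self-attention pass can relate spatial features across time segments. The tensor layout and module name are assumptions for illustration, not the paper's code.

```python
import torch
import torch.nn as nn

class SimultaneousSpatialTemporalAttention(nn.Module):
    """Sketch: flatten (electrode, time-segment) pairs into joint tokens so
    self-attention can relate spatial features across time segments."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, segments, electrodes, dim)
        b, t, e, d = x.shape
        tokens = x.reshape(b, t * e, d)  # joint spatial-temporal tokens
        out, _ = self.attn(tokens, tokens, tokens)
        return out.reshape(b, t, e, d)

eeg = torch.randn(8, 10, 62, 32)  # e.g. 10 one-second segments, 62 electrodes
print(SimultaneousSpatialTemporalAttention(32)(eeg).shape)
```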
Stock market prediction is a traditional yet complex problem that has been researched across diverse areas and application domains due to the market's non-linear, highly volatile, and complex nature. Existing surveys on stock market prediction often focus on traditional machine learning methods rather than deep learning methods. Deep learning has dominated many domains and in recent years has gained much success and popularity in stock market prediction. This motivates us to provide a structured and comprehensive overview of the research on stock market prediction focusing on deep learning techniques. We present four elaborated subtasks of stock market prediction and propose a novel taxonomy to summarize the state-of-the-art models based on deep neural networks from 2011 to 2022. In addition, we provide detailed statistics on the datasets and evaluation metrics commonly used in the stock market. Finally, we highlight some open issues and point out several future directions by sharing some new perspectives on stock market prediction.
Quantitative assessment of human stability using foot pressure/force measurement hardware and motion capture (mocap) technology is expensive, time-consuming, and restricted to the laboratory. We propose a novel image-based method to estimate three key components of stability computation: center of mass (CoM), base of support (BoS), and center of pressure (CoP). Furthermore, we quantitatively validate our image-based methods for computing two classic stability measures against those generated directly from laboratory-based sensory output (ground truth), using a publicly available multi-modal (mocap, foot pressure, two-view video) ten-subject human motion dataset. Our experimental results show that: 1) our CoM estimation method (COMNet) consistently outperforms state-of-the-art inertial-sensor-based CoM estimation techniques; 2) our image-based method, combined with insole foot pressure alone, yields consistent and statistically significant correlations with the ground-truth stability measures (CoMtoCoP r = 0.79, p < 0.001; CoMtoBoS r = 0.75, p < 0.001); 3) our fully image-based estimation of the stability measures yields consistent, positive, and statistically significant correlations on the two stability metrics (CoMtoCoP r = 0.31, p < 0.001; CoMtoBoS r = 0.22, p < 0.001). Our study provides promising quantitative evidence for stability computation and monitoring in natural environments.
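For intuition, here is a hedged sketch of how the two classic stability measures (CoMtoCoP and CoMtoBoS) might be computed per frame and validated against ground truth with a Pearson correlation. The array layouts and the nearest-vertex BoS approximation are assumptions for illustration only, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import pearsonr

def com_to_cop(com_xy: np.ndarray, cop_xy: np.ndarray) -> np.ndarray:
    """Horizontal distance between center of mass and center of pressure
    per frame; smaller distances generally indicate greater stability."""
    return np.linalg.norm(com_xy - cop_xy, axis=-1)

def com_to_bos(com_xy: np.ndarray, bos_polygons: list) -> np.ndarray:
    """Distance from the CoM projection to the base-of-support boundary,
    here approximated by the nearest BoS polygon vertex per frame."""
    return np.array([
        np.min(np.linalg.norm(poly - com, axis=-1))
        for com, poly in zip(com_xy, bos_polygons)
    ])

# Validate image-based estimates against lab-based ground truth.
frames = 500
gt_com, est_com = np.random.rand(frames, 2), np.random.rand(frames, 2)
gt_cop = np.random.rand(frames, 2)
r, p = pearsonr(com_to_cop(gt_com, gt_cop), com_to_cop(est_com, gt_cop))
print(f"CoMtoCoP correlation r={r:.2f}, p={p:.3g}")
```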
Recognizing human emotions plays a key role in daily communication. Neuroscience has demonstrated that different emotional states correspond to different degrees of activation in different brain regions, EEG frequency bands, and temporal stamps. In this paper, we propose a novel structure to explore informative EEG features for emotion recognition. The proposed module, denoted PST-Attention, consists of positional, spectral, and temporal attention modules that explore more discriminative EEG features. Specifically, the positional attention module captures the regions activated by different emotional stimuli in the spatial dimension. The spectral and temporal attention modules assign weights to different frequency bands and time slices, respectively. Our method is adaptive and can be attached to 3D convolutional neural networks (3D-CNN) as a plug-in module. We conduct experiments on two real-world datasets. The 3D-CNN combined with our module achieves promising results and demonstrates that PST-Attention is able to capture stable patterns for emotion recognition from EEG.
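A minimal sketch of the plug-in idea follows, assuming 3D-CNN features shaped (batch, bands, time, height, width) and modeling each attention branch as a learnable gate over one axis; this simplification is illustrative and not the paper's exact attention computation.

```python
import torch
import torch.nn as nn

class PSTAttention(nn.Module):
    """Sketch of positional, spectral, and temporal attention as a plug-in
    for 3D-CNN features shaped (batch, bands, time, H, W)."""

    def __init__(self, bands: int, time: int, h: int, w: int):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(1, 1, 1, h, w))       # spatial regions
        self.spec = nn.Parameter(torch.zeros(1, bands, 1, 1, 1))  # frequency bands
        self.temp = nn.Parameter(torch.zeros(1, 1, time, 1, 1))   # time slices

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # each branch re-weights one axis; sigmoid keeps weights in (0, 1)
        x = x * torch.sigmoid(self.pos)
        x = x * torch.sigmoid(self.spec)
        return x * torch.sigmoid(self.temp)

feat = torch.randn(4, 5, 6, 9, 9)  # e.g. 5 bands, 6 time slices, 9x9 grid
print(PSTAttention(5, 6, 9, 9)(feat).shape)
```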
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT exhibits strong robustness even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
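A hedged sketch of the overall design: image and point-cloud tokens are simply concatenated as decoder memory, and object queries regress 3D boxes directly. Token extraction, position encoding, and the box parameterization below are placeholders, not CMT's actual implementation.

```python
import torch
import torch.nn as nn

class CrossModalDetectorSketch(nn.Module):
    """Sketch of the CMT idea: image and point-cloud tokens are concatenated
    and object queries decode 3D boxes directly, with no explicit view
    transformation. Token extraction and the box head are placeholders."""

    def __init__(self, dim: int = 256, num_queries: int = 900):
        super().__init__()
        self.queries = nn.Embedding(num_queries, dim)
        layer = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.box_head = nn.Linear(dim, 10)  # e.g. center, size, yaw, velocity

    def forward(self, img_tokens, pts_tokens):
        # implicit alignment: both token sets carry 3D position encodings
        memory = torch.cat([img_tokens, pts_tokens], dim=1)
        q = self.queries.weight.unsqueeze(0).expand(memory.size(0), -1, -1)
        return self.box_head(self.decoder(q, memory))

img = torch.randn(2, 1000, 256)  # image tokens (with 3D position encoding)
pts = torch.randn(2, 800, 256)   # LiDAR tokens
print(CrossModalDetectorSketch()(img, pts).shape)  # (2, 900, 10)
```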
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility, and the security risks stemming from them have not been explored. This study performs the first backdoor attack, in the image domain, against models trained on data distilled by dataset distillation. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
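As a concrete illustration of the NAIVEATTACK idea, the sketch below stamps a fixed trigger patch onto a fraction of the raw images and relabels them before distillation begins; the function and variable names are hypothetical, not the paper's code.

```python
import torch

def apply_trigger(images: torch.Tensor, trigger: torch.Tensor,
                  target_label: int, labels: torch.Tensor, rate: float = 0.1):
    """Sketch of the NAIVEATTACK idea: stamp a fixed trigger patch onto a
    fraction of the raw images *before* distillation and relabel them, so
    the backdoor is baked into the synthetic dataset."""
    poisoned = images.clone()
    n = int(rate * len(images))
    h, w = trigger.shape[-2:]
    poisoned[:n, :, -h:, -w:] = trigger  # bottom-right corner patch
    new_labels = labels.clone()
    new_labels[:n] = target_label
    return poisoned, new_labels

imgs = torch.rand(100, 3, 32, 32)
lbls = torch.randint(0, 10, (100,))
trig = torch.ones(3, 4, 4)  # simple white square trigger
p_imgs, p_lbls = apply_trigger(imgs, trig, target_label=0, labels=lbls)
```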
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest, state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while models that are not pre-trained (Transformer) show no such ability beyond naive repetition. Evaluating generated music is a challenging task, and evaluating drum grooves, for which there is little precedent in the literature, is even more so. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
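To illustrate the language-to-music transfer setup, here is a sketch of serializing a drum MIDI file into a text sequence suitable for language-model fine-tuning, assuming the pretty_midi library; the pitch@step encoding scheme is an illustrative assumption, not the paper's tokenization.

```python
import pretty_midi  # assumed dependency for reading drum MIDI files

def drums_to_text(midi_path: str, step: float = 0.125) -> str:
    """Sketch: serialize a drum performance as a text sequence so a
    text-pretrained language model can be fine-tuned on it. The encoding
    (pitch@quantized-step tokens) is illustrative only."""
    midi = pretty_midi.PrettyMIDI(midi_path)
    events = []
    for inst in midi.instruments:
        if inst.is_drum:
            for note in inst.notes:
                events.append((round(note.start / step), note.pitch))
    events.sort()
    return " ".join(f"{pitch}@{tick}" for tick, pitch in events)

# The resulting strings could then serve as prompt/completion pairs when
# fine-tuning a large language model such as GPT3.
```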
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes given only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects: the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on the novel classes improves significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking results on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
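A minimal sketch of the first reference step follows: mask-pooled support features form a dynamic class center, whose similarity to query features acts as a re-weighting signal. The shapes and the cosine-similarity re-weighting are assumptions for illustration, not the RefT implementation.

```python
import torch
import torch.nn.functional as F

def dynamic_class_center(support_feat, support_mask):
    """Sketch: pool support features inside the support mask to build a
    per-class center vector."""
    # support_feat: (C, H, W); support_mask: (H, W) binary
    masked = support_feat * support_mask.unsqueeze(0)
    return masked.sum(dim=(1, 2)) / support_mask.sum().clamp(min=1)  # (C,)

def reweight_query(query_feat, center):
    # query_feat: (C, H, W); cosine similarity to the class center per pixel
    sim = F.cosine_similarity(query_feat, center[:, None, None], dim=0)
    return query_feat * sim.clamp(min=0).unsqueeze(0)

sup = torch.randn(256, 32, 32)
msk = (torch.rand(32, 32) > 0.7).float()
qry = torch.randn(256, 32, 32)
out = reweight_query(qry, dynamic_class_center(sup, msk))
print(out.shape)  # torch.Size([256, 32, 32])
```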
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
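For intuition only, the sketch below combines a standard soft-label distillation term with a demographic-parity penalty on the student's predictions; this proxy objective illustrates fairness-aware KD in general and is not the RELIANT objective.

```python
import torch
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, sens_attr,
                 temperature: float = 2.0, alpha: float = 0.5):
    """Sketch of fairness-aware distillation: the usual soft-label KD term
    plus a penalty on the demographic-parity gap between sensitive groups.
    An illustrative proxy, not the RELIANT objective itself."""
    t = temperature
    kd = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * t * t
    probs = F.softmax(student_logits, dim=-1)[:, 1]  # positive-class prob
    gap = (probs[sens_attr == 1].mean() - probs[sens_attr == 0].mean()).abs()
    return kd + alpha * gap

s = torch.randn(64, 2)   # student logits
te = torch.randn(64, 2)  # teacher logits
a = torch.randint(0, 2, (64,))  # binary sensitive attribute
print(fair_kd_loss(s, te, a))
```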